8 research outputs found

    Virtual Rephotography: Novel View Prediction Error for 3D Reconstruction

    Full text link
    The ultimate goal of many image-based modeling systems is to render photo-realistic novel views of a scene without visible artifacts. Existing evaluation metrics and benchmarks focus mainly on the geometric accuracy of the reconstructed model, which is, however, a poor predictor of visual accuracy. Furthermore, geometric accuracy by itself does not allow evaluating systems that either lack a geometric scene representation or utilize coarse proxy geometry. Examples include light field or image-based rendering systems. We propose a unified evaluation approach based on novel view prediction error that is able to analyze the visual quality of any method that can render novel views from input images. One of the key advantages of this approach is that it does not require ground truth geometry. This dramatically simplifies the creation of test datasets and benchmarks. It also allows us to evaluate the quality of an unknown scene during the acquisition and reconstruction process, which is useful for acquisition planning. We evaluate our approach on a range of methods including standard geometry-plus-texture pipelines as well as image-based rendering techniques, compare it to existing geometry-based benchmarks, and demonstrate its utility for a range of use cases.
    Comment: 10 pages, 12 figures; paper was submitted to ACM Transactions on Graphics for review
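    The core idea of novel view prediction error can be sketched as follows: render the scene from a held-out camera pose and compare the rendering against the actual photograph taken from that pose. The sketch below uses a simple mean absolute per-pixel difference as the image metric; the paper's actual metric and the function name `view_prediction_error` are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def view_prediction_error(rendered, reference, mask=None):
    """Mean absolute per-pixel difference between a rendered novel view
    and a held-out reference photograph (hypothetical helper; the paper
    may use a perceptual metric instead of raw pixel differences)."""
    rendered = np.asarray(rendered, dtype=np.float64)
    reference = np.asarray(reference, dtype=np.float64)
    diff = np.abs(rendered - reference)
    if mask is not None:
        # evaluate only pixels the renderer actually covers
        diff = diff[mask]
    return diff.mean()

# toy usage: a perfect rendering of the held-out view gives zero error
img = np.random.rand(4, 4, 3)
print(view_prediction_error(img, img))  # -> 0.0
```

    Because the comparison happens entirely in image space, no ground-truth geometry is needed; any method that can render a novel view can be scored this way.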

    Multi-view Photometric Stereo Using a Normal Consistency Approach

    No full text
    Scene reconstruction is one of the important problems in computer vision. One approach to scene reconstruction is photometric stereo, whose main idea is that we can glean the surface orientation from changes in pixel intensity values. These changes are induced by varying the illumination while keeping the viewpoint fixed. If we want to reconstruct a complete scene with photometric stereo, we have to drop the fixed-viewpoint assumption, and the pixel correspondences are no longer trivial. In this work we present a novel normal consistency metric for points in 3D space which enables us to find points on the surface without having to explicitly model the pixel correspondences; the problem of pixel correspondences is solved implicitly. We obtain a set of oriented points in a volumetric grid from which a surface can easily be reconstructed. The proposed algorithm thus combines the advantages of classic photometric stereo and multi-view reconstruction methods. It automatically reconstructs a triangle mesh from input images with known viewpoints and illumination directions. If the scene is sampled densely enough, the proposed approach is robust against self-occlusion, shadowing, and isolated specular highlights.
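    The classic fixed-viewpoint building block works like this: under a Lambertian model, the observed intensity at a pixel is I = albedo * (n . l), so with three or more known light directions the scaled normal can be recovered by least squares. The sketch below shows that step plus a simple pairwise-agreement score between normals; the function names and the exact form of the consistency score are assumptions for illustration, not the paper's metric.

```python
import numpy as np

def estimate_normal(intensities, light_dirs):
    """Classic Lambertian photometric stereo at one pixel:
    I = albedo * (n . l)  =>  solve L @ g = I for g = albedo * n."""
    L = np.asarray(light_dirs, dtype=float)   # (k, 3) unit light directions
    I = np.asarray(intensities, dtype=float)  # (k,) observed intensities
    g, *_ = np.linalg.lstsq(L, I, rcond=None)
    albedo = np.linalg.norm(g)
    return g / albedo, albedo

def normal_consistency(normals):
    """Mean pairwise dot product of unit normals estimated from different
    views: close to 1 when the views agree on the surface orientation
    (hypothetical agreement score, not the paper's exact metric)."""
    N = np.asarray(normals, dtype=float)
    G = N @ N.T                       # all pairwise dot products
    k = len(N)
    return (G.sum() - k) / (k * (k - 1))

# synthetic check: a flat patch facing the camera, albedo 0.8
lights = np.array([[0.0, 0.0, 1.0],
                   [0.5, 0.0, np.sqrt(0.75)],
                   [0.0, 0.5, np.sqrt(0.75)]])
true_n = np.array([0.0, 0.0, 1.0])
I = 0.8 * lights @ true_n
n_est, albedo = estimate_normal(I, lights)
```

    The paper's key move is to evaluate such normals for candidate points in a 3D volume: a point whose normals, estimated independently from several viewpoints, agree with each other is likely to lie on the surface, which sidesteps explicit pixel correspondences.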

    Massively Parallel Implementation of a Multi-View Stereo Algorithm

    No full text
    This thesis deals with the implementation of a multi-view stereo algorithm in a massively parallel way. The nature of multi-view stereo algorithms favors a parallel approach, as these algorithms operate on multiple images and, in these images, on multiple pixels. This work focuses on the matching optimization of a region-growing technique in the multi-view stereo approach by Goesele et al., which is the most computationally intensive step of the algorithm. Starting from a sparse 3D point cloud reconstructed using structure-from-motion methods, a nonlinear optimization is employed using a photoconsistency measure. Building upon an existing CPU implementation, an implementation of the matching optimization is written for NVIDIA graphics hardware, which is essentially a massively parallel computing device. To access the hardware we make use of the Compute Unified Device Architecture (CUDA), a technology for General-Purpose computation on GPUs (GPGPU). Subsequently, the GPU implementation is compared with the CPU version with regard to reconstruction quality and processing time. Furthermore, different optimizations for exploiting the GPU architecture (and thereby reducing the processing time) are discussed and evaluated.
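    A common photoconsistency measure in such matching optimizations is normalized cross-correlation (NCC) between image patches, and each patch comparison is independent of all others, which is what makes the step embarrassingly parallel. The sketch below is a minimal CPU-side illustration of that structure; the thesis' actual measure, patch model, and kernel layout may differ, and `best_match` is a hypothetical helper.

```python
import numpy as np

def ncc(patch_a, patch_b, eps=1e-8):
    """Normalized cross-correlation between two image patches, a common
    photoconsistency measure (invariant to affine intensity changes)."""
    a = np.asarray(patch_a, dtype=float).ravel()
    b = np.asarray(patch_b, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def best_match(ref_patch, candidates):
    """Score every candidate patch against the reference and keep the best.
    Each score is independent, so on a GPU one thread (or block) could
    evaluate one candidate in parallel; here we simply loop."""
    scores = [ncc(ref_patch, c) for c in candidates]
    i = int(np.argmax(scores))
    return i, scores[i]
```

    On the GPU, this per-candidate independence maps directly onto CUDA's thread hierarchy, which is why offloading the matching step yields the largest speedup.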